21 research outputs found

    Biologically vs. logic inspired encoding of facial actions and emotions in video


    Cost-effective solution to synchronised audio-visual data capture using multiple sensors

    Applications such as surveillance and human behaviour analysis require high-bandwidth recording from multiple cameras, as well as from other sensors. In turn, sensor fusion has increased the required accuracy of synchronisation between sensors. Using commercial off-the-shelf components may compromise quality and accuracy, because it is difficult to handle the combined data rate from multiple sensors, the offset and rate discrepancies between independent hardware clocks, the absence of trigger inputs or outputs in the hardware, as well as the different methods for timestamping the recorded data. To achieve accurate synchronisation, we centralise the synchronisation task by recording all trigger or timestamp signals with a multi-channel audio interface. For sensors that don’t have an external trigger signal, we let the computer that captures the sensor data periodically generate timestamp signals from its serial port output. These signals can also be used as a common time base to synchronise multiple asynchronous audio interfaces. Furthermore, we show that a consumer PC can currently capture 8-bit video data at 1024x1024 spatial and 59.1 Hz temporal resolution from at least 14 cameras, together with 8 channels of 24-bit audio at 96 kHz. We thus improve the quality/cost ratio of multi-sensor data capture systems.
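
    A minimal sketch of the serial-port timestamping idea described above (not the authors' implementation; the device name, baud rate, and pulse period are illustrative assumptions, and pyserial is assumed available):

```python
# Periodically emit a pulse on a serial control line and log the host timestamp
# of each pulse. The pulse train can be recorded by the multi-channel audio
# interface, so the log can later be aligned with the capture PC's clock.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"    # assumed device name; depends on the actual hardware
PULSE_PERIOD_S = 1.0     # assumed interval between timestamp pulses

def emit_timestamp_pulses(port: str = PORT, period: float = PULSE_PERIOD_S) -> None:
    with serial.Serial(port, baudrate=115200) as ser, open("pulse_log.txt", "w") as log:
        next_pulse = time.monotonic()
        while True:
            ser.rts = True                 # rising edge the audio interface can record
            t = time.time()                # host timestamp of that edge
            ser.rts = False
            log.write(f"{t:.6f}\n")        # pair this log with the recorded audio track
            log.flush()
            next_pulse += period
            time.sleep(max(0.0, next_pulse - time.monotonic()))

if __name__ == "__main__":
    emit_timestamp_pulses()
```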

    EMOPAIN Challenge 2020: Multimodal Pain Evaluation from Facial and Bodily Expressions.

    The EmoPain 2020 Challenge is the first international competition aimed at creating a uniform platform for the comparison of machine learning and multimedia processing methods for automatic chronic pain assessment from human expressive behaviour, as well as for the identification of pain-related behaviours. The objective of the challenge is to promote research in the development of assistive technologies that help improve the quality of life of people with chronic pain via real-time monitoring and feedback, helping them manage their condition and remain physically active. The challenge also aims to encourage the use of the relatively underutilised, albeit vital, bodily expression signals for automatic pain and pain-related emotion recognition. This paper presents a description of the challenge, competition guidelines, benchmarking dataset, and the baseline systems' architecture and performance on the three sub-tasks: pain estimation from facial expressions, pain recognition from multimodal movement, and protective movement behaviour detection. Funding: EPSRC grant Emotion & Pain Project; NIHR Nottingham Biomedical Research Centre.

    FERA 2017 - Addressing head pose in the third facial expression recognition and analysis challenge

    The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle to assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Unit (AU) occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE Conference on Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges.
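
    The abstract does not spell out the scoring functions, so the sketch below only assumes the metrics commonly used in the FERA series: per-AU F1 for occurrence detection and intra-class correlation ICC(3,1) for intensity estimation. It is an illustration of how the two sub-challenges could be scored, not the official evaluation code:

```python
import numpy as np
from sklearn.metrics import f1_score

def au_occurrence_f1(y_true, y_pred):
    """Per-AU binary F1, averaged over AUs. Arrays are (frames, num_AUs) of 0/1 labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean([f1_score(y_true[:, k], y_pred[:, k])
                          for k in range(y_true.shape[1])]))

def icc_3_1(y_true, y_pred):
    """ICC(3,1) consistency between predicted and ground-truth intensities for one AU."""
    data = np.stack([np.asarray(y_true, float), np.asarray(y_pred, float)], axis=1)
    n, k = data.shape                          # n frames, k = 2 "raters"
    target_means = data.mean(axis=1)           # per-frame means
    grand_mean = data.mean()
    ms_between = k * np.sum((target_means - grand_mean) ** 2) / (n - 1)
    residual = data - target_means[:, None] - data.mean(axis=0) + grand_mean
    ms_error = np.sum(residual ** 2) / ((n - 1) * (k - 1))
    return (ms_between - ms_error) / (ms_between + (k - 1) * ms_error)
```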

    A computational framework for measuring the facial emotional expressions

    The purpose of this chapter is to discuss and present a computational framework for detecting and analysing facial expressions efficiently. The approach here is to identify the face and estimate regions of facial features of interest using the optical flow algorithm. Once the regions and their dynamics are computed, a rule-based system can be utilised for classification. Using this framework, we show how it is possible to accurately identify and classify facial expressions, match them with FACS coding, and infer the underlying basic emotions in real time.
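
    A minimal sketch of the general pipeline outlined above (not the chapter's actual framework): dense optical flow is computed inside a facial region of interest and a simple hand-written rule is applied to the region's mean motion. The ROI coordinates, Farneback parameters, and the single rule are illustrative assumptions:

```python
import cv2
import numpy as np

def region_flow(prev_gray, curr_gray, roi):
    """Mean optical-flow vector (dx, dy) inside roi = (x, y, w, h)."""
    x, y, w, h = roi
    flow = cv2.calcOpticalFlowFarneback(prev_gray[y:y+h, x:x+w],
                                        curr_gray[y:y+h, x:x+w],
                                        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return float(flow[..., 0].mean()), float(flow[..., 1].mean())

def classify_mouth_motion(dy, threshold=0.5):
    """Toy rule: sustained upward motion of the mouth region suggests AU12 (lip corner puller)."""
    return "AU12-like" if dy < -threshold else "neutral"

# Example per-frame use, with a hypothetical mouth-region ROI:
# dx, dy = region_flow(prev_gray, curr_gray, roi=(120, 200, 100, 60))
# label = classify_mouth_motion(dy)
```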

    A method for location based search for enhancing facial feature design

    In this paper we present a new method for accurate real-time facial feature detection. Our method is based on local feature detection and enhancement. Previous work in this area, such as that of Viola and Jones, requires looking at the face as a whole. Consequently, such approaches have an increased chance of reporting negative hits. Furthermore, such algorithms require greater processing power, and hence they are not particularly attractive for real-time applications. Through our recent work, we have devised a method to identify the face in real-time images and divide it into regions of interest (ROI). Firstly, based on a face detection algorithm, we identify the face and divide it into four main regions. Then, we undertake a local search within those ROI, looking for specific facial features. This enables us to locate the desired facial features more efficiently and accurately. We have tested our approach using the Extended Cohn-Kanade (CK+) facial expression database. The results show that applying the ROI approach yields a relatively low false positive rate as well as a marked gain in overall computational efficiency. In particular, we show that our method achieves a 4-fold increase in accuracy compared to existing algorithms for facial feature detection.
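
    A minimal sketch of the ROI idea (not the paper's exact method), using OpenCV's stock Haar cascades: detect the face, split the face box into four regions, and search for a feature only inside the relevant region. The 2x2 split and the eye-search example are illustrative assumptions:

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def face_rois(gray):
    """Return the four quadrants (x, y, w, h) of the first detected face, or None."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    hw, hh = w // 2, h // 2
    return {"upper_left": (x, y, hw, hh),       "upper_right": (x + hw, y, hw, hh),
            "lower_left": (x, y + hh, hw, hh),  "lower_right": (x + hw, y + hh, hw, hh)}

def find_left_eye(gray, rois):
    """Search for an eye only inside the upper-left quadrant of the face."""
    rx, ry, rw, rh = rois["upper_left"]
    eyes = eye_cascade.detectMultiScale(gray[ry:ry+rh, rx:rx+rw])
    # Map detections back to full-image coordinates.
    return [(rx + ex, ry + ey, ew, eh) for ex, ey, ew, eh in eyes]
```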